A Calculus for Exploiting Data Parallelism on Recursively Defined Data (Preliminary Report)
Authors
Abstract
Array-based data parallel programming can be generalized in two ways to make it an appropriate paradigm for parallel processing of general recursively defined data. The first is the introduction of a parallel evaluation mechanism for dynamically allocated recursively defined data. It achieves the effect of applying the same function to all the subterms of a given datum in parallel. The second is a new notion of recursion, which we call parallel recursion, for parallel evaluation of recursively defined data. In contrast with ordinary recursion, which uses only the final results of the recursive calls on its immediate subterms, the new recursion repeatedly transforms a recursive datum, represented by a system of equations, into another recursive datum by applying the same function to each of the equations simultaneously, until the final result is obtained. This mechanism exploits more parallelism and achieves significant speedup compared with the conventional parallel evaluation of recursive functions. Based on these observations, we propose a typed lambda calculus for data parallel programming and give an operational semantics that integrates the parallel evaluation mechanism and the new form of recursion in the semantic core of a typed lambda calculus. We also describe an implementation method for massively parallel multicomputers, which makes it possible to execute parallel recursion with the expected performance.
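To make the two mechanisms concrete, here is a minimal Haskell sketch; it is not the paper's calculus or its implementation, and all names (Tree, pmapTree, Eqns, step, listRanks) are hypothetical illustrations. The first part shows "applying the same function to all the subterms of a datum in parallel" as a parallel map over a recursive datum; the second part shows the flavor of parallel recursion by repeatedly rewriting a system of equations with the same function applied to every equation at once (pointer doubling on a linked list), so the number of rounds is logarithmic rather than linear.

```haskell
import Control.Parallel (par, pseq)   -- from the "parallel" package
import qualified Data.Map as M
import Data.Maybe (isNothing)

-- A recursively defined datum standing in for the paper's general case.
data Tree a = Leaf a | Node (Tree a) (Tree a)

-- Parallel evaluation mechanism (sketch): apply the same function to all
-- subterms, sparking the left subtree while the right one is evaluated.
-- (par/pseq only force to weak head normal form; a real implementation
-- would force the results more deeply.)
pmapTree :: (a -> b) -> Tree a -> Tree b
pmapTree f (Leaf x)   = Leaf (f x)
pmapTree f (Node l r) =
  let l' = pmapTree f l
      r' = pmapTree f r
  in l' `par` (r' `pseq` Node l' r')

-- Parallel recursion (rough analogue): a recursive datum represented as a
-- system of equations, one per node, giving its successor and a partial rank.
type Eqns = M.Map Int (Maybe Int, Int)

-- One simultaneous rewriting step: every equation is transformed by the same
-- function at once, short-cutting each node past its successor.
step :: Eqns -> Eqns
step eqs = M.map jump eqs
  where
    jump (Nothing, r) = (Nothing, r)
    jump (Just j,  r) =
      let (next, r') = eqs M.! j
      in  (next, r + r')

-- Iterate until no equation refers to another; O(log n) rounds instead of
-- the O(n) sequential recursion over the list.
listRanks :: Eqns -> M.Map Int Int
listRanks eqs
  | all (isNothing . fst) (M.elems eqs) = M.map snd eqs
  | otherwise                           = listRanks (step eqs)

-- Example: a three-element list 0 -> 1 -> 2 -> end.
-- listRanks (M.fromList [(0,(Just 1,1)), (1,(Just 2,1)), (2,(Nothing,0))])
--   == M.fromList [(0,2), (1,1), (2,0)]
```

The pointer-doubling example is only meant to convey why transforming all equations of a recursive datum simultaneously can beat ordinary recursion, which must wait for the final results of its immediate subterms before combining them.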
Similar papers
SENSITIVITY ANALYSIS OF EFFICIENT AND INEFFICIENT UNITS IN INTEGER-VALUED DATA ENVELOPMENT ANALYSIS
One of the issues in Data Envelopment Analysis (DEA) is the sensitivity and stability region of a specific decision making unit (DMU), including efficient and inefficient DMUs. In sensitivity analysis of efficient DMUs, the largest region, namely the stability region, should be found, in which data variations apply only to the efficient DMU under evaluation while the data for the remaining DMUs are assumed fixed. Also efficient DMU u...
Exploiting Delayed Synchronization Arrivals in Light-Weight Data Parallelism
SPMD (Single Program Multiple Data) models and other traditional models of data parallelism provide parallelism at the processor level. Barrier synchronization is defined at the level of processors: when a processor arrives at the barrier point early and waits for others to arrive, no other useful work is done on that processor. Program restructuring is one way of minimizing such latencies. How...
Hardware for Exploiting Data Level Parallelism: Written Preliminary Examination II
Current trends in performance improvement favor increasing parallelism over increasing the clock rate for a variety of reasons. Although many forms of parallelism exist, Data Level Parallelism (DLP) is one form which scales well and is particularly useful in scientific as well as media applications. This paper examines four modern architectures for exploiting DLP in media applications: VIRAM, S...
Exploiting Data-Parallelism on Multicore and SMT Systems for Implementing the Fractal Image Compressing Problem
This paper presents a parallel modeling of a lossy image compression method based on the fractal theory and its evaluation over two versions of dual-core processors: with and without simultaneous multithreading (SMT) support. The idea is to observe the speedup on both configurations when changing application parameters and the number of threads at operating system level. Our target application ...
Integrated Support for Task and Data Parallelism
We present an overview of research at the CRPC designed to provide an efficient, portable programming model for scientific applications possessing both task and data parallelism. Fortran M programs exploit task parallelism by providing language extensions for user-defined process management and typed communication channels. A combination of compiler and run-time system support ensures modularity, sa...